
Towards a neural architecture of language: Deep learning versus logistics of access in neural architectures for compositional processing

van der Velde, Frank

arXiv.org Artificial Intelligence

Recently, a number of articles have argued that deep learning models such as GPT could also capture key aspects of language processing in the human mind and brain. However, I will argue that these models are not suitable as neural models of human language. First, they fail on fundamental boundary conditions, such as the amount of learning they require, which implies that the mechanisms of GPT and of brain language processing are fundamentally different. Second, they do not possess the logistics of access needed for compositional and productive human language processing. Neural architectures could possess logistics of access based on small-world-like network structures, in which processing does not consist of symbol manipulation but of controlling the flow of activation. In this view, two complementary approaches would be needed to investigate the relation between brain and cognition. Investigating learning methods could reveal how 'learned cognition' as found in deep learning could develop in the brain. However, neural architectures with logistics of access should also be developed to account for 'productive cognition' as required for natural or artificial human language processing. Later on, these approaches could perhaps be combined to see how such architectures could develop by learning and development from a simpler basis.


Deep Learning Alone Isn't Getting Us To Human-Like AI

#artificialintelligence

Of course, deep learning has made progress, but on those foundational questions, not so much. On natural language, compositionality, and reasoning, which differ from the kinds of pattern recognition at which deep learning excels, these systems remain massively unreliable, exactly as you would expect from systems that rely on statistical correlations rather than an algebra of abstraction. Minerva, the latest, greatest AI system as of this writing, with billions of "tokens" in its training, still struggles with multiplying 4-digit numbers.


What AI Can Tell Us About Intelligence

#artificialintelligence

In short, much of our understanding of the world is given by nature, with learning as a matter of fleshing out the details. There is an alternate, empiricist view which inverts this: symbol manipulation is a rarity in nature, primarily arising as a learned capacity for communication, acquired gradually by our hominin ancestors over the last two million years. On this view, the primary cognitive capacities are non-symbolic learning abilities bound up with improving survival, such as rapidly recognizing prey, predicting their likely actions, and developing skillful responses. This assumes that the vast majority of complex cognitive abilities are acquired through a general, self-supervised learning capacity, one that acquires through experience an intuitive world-model capable of the central features of common sense. It also assumes that most of our complex cognitive capacities do not turn on symbol manipulation; they make do, instead, with simulating various scenarios and predicting the best outcomes.


The future of deep learning, according to its pioneers

#artificialintelligence

Deep neural networks will move past their shortcomings without help from symbolic artificial intelligence, three pioneers of deep learning argue in a paper published in the July issue of the Communications of the ACM journal. In their paper, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, recipients of the 2018 Turing Award, explain the current challenges of deep learning and how it differs from learning in humans and animals. They also explore recent advances in the field that might provide blueprints for the future directions for research in deep learning. Titled "Deep Learning for AI," the paper envisions a future in which deep learning models can learn with little or no help from humans, are flexible to changes in their environment, and can solve a wide range of reflexive and cognitive problems.


Everything you need to know about artificial general intelligence

#artificialintelligence

The workshop marked the official beginning of AI history. But the two-month effort, and many others that followed, only proved that human intelligence is very complicated, and that the complexity becomes more evident as you try to replicate it. That is why, despite six decades of research and development, we still don't have AI that rivals the cognitive abilities of a human child, let alone one that can think like an adult. What we do have, however, is a field of science that is split into two different categories: artificial narrow intelligence (ANI), what we have today, and artificial general intelligence (AGI), what we hope to achieve. Defining artificial general intelligence is very difficult.


What is artificial general intelligence (general AI/AGI)?

#artificialintelligence

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. From ancient mythology to modern science fiction, humans have been dreaming of creating artificial intelligence for millennia. But the endeavor of synthesizing intelligence only began in earnest in the late 1950s, when a dozen scientists gathered at Dartmouth College, NH, for a two-month workshop to create machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." The workshop marked the official beginning of AI history. But the two-month effort, and many others that followed, only proved that human intelligence is very complicated, and that the complexity becomes more evident as you try to replicate it.


Everything you need to know about narrow AI

#artificialintelligence

In 1956, a group of scientists led by John McCarthy, a young assistant professor of mathematics, gathered at Dartmouth College, NH, for an ambitious six-week project: creating computers that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves." The project kickstarted the field that has become known as artificial intelligence (AI). At the time, the scientists thought that a "2-month, 10-man study of artificial intelligence" would solve the biggest part of the AI equation. "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer," the first AI proposal read. Decades later, we still don't have machines that can think and solve problems like a human child, let alone an adult.